Toward an Improved Downward Refinement Operator for Inductive Logic Programming
Author
Abstract
In real-world supervised Machine Learning tasks, a learned theory can be deemed valid only until there is evidence to the contrary (i.e., new observations that the theory misclassifies). In such cases, incremental approaches allow the existing theory to be revised to account for the new evidence, instead of learning a new theory from scratch. In many settings, positive and negative examples arrive in a mixed and unpredictable order, so both generalization and specialization refinement operators must be available for revising the hypotheses in the existing theory whenever it is inconsistent with the new examples. The space of Datalog Horn clauses under the Object Identity (OI) assumption admits refinement operators that fulfill desirable properties. However, the versions of these operators currently available in the literature cannot handle some refinement tasks. The objective of this work is to pave the way for an improved version of the specialization operator, aimed at extending its applicability.
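To make the notion of a specialization (downward refinement) operator concrete, the sketch below illustrates one classic refinement step on Datalog clauses: appending a body literal. The clause representation, the `add_literal_refinements` name, and the fresh-variable scheme are illustrative assumptions, not the operator studied in the paper; under OI, distinct variables denote distinct objects, so specialization proceeds by adding literals rather than by unifying variables.

```python
from itertools import product

# Hypothetical minimal representation: a clause is (head, body), each
# literal is (predicate, tuple_of_args), and any argument starting with
# an uppercase letter is a variable.  This sketches ONE kind of downward
# refinement step (adding a body literal), under the OI assumption.

def clause_vars(clause):
    """Collect the variables occurring in a clause."""
    head, body = clause
    vs = set()
    for _, args in [head, *body]:
        vs.update(a for a in args if a[0].isupper())
    return vs

def add_literal_refinements(clause, predicates, max_fresh=1):
    """Specialize a Datalog clause by appending one body literal.

    `predicates` maps predicate names to arities.  Each argument slot is
    filled with an existing variable or a fresh one; variables are never
    unified, since under Object Identity that is not how clauses are
    specialized -- literals are only added.
    """
    head, body = clause
    old_vars = sorted(clause_vars(clause))
    fresh = [f"V{i}" for i in range(max_fresh)]
    for pred, arity in predicates.items():
        for args in product(old_vars + fresh, repeat=arity):
            lit = (pred, args)
            if lit not in body:          # keep the step proper: no duplicates
                yield (head, body + [lit])
```

For example, refining `father(X, Y) :- parent(X, Y)` with a `male/1` predicate yields strictly more specific clauses such as `father(X, Y) :- parent(X, Y), male(X)`, each covering no more examples than the original.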
Similar resources
Ideal Refinement of Datalog Clauses Using Primary Keys
Inductive Logic Programming (ILP) algorithms are frequently used to data mine multi-relational databases. However, in many ILP algorithms the use of primary key constraints is limited. We show how primary key constraints can be incorporated in a downward refinement operator. This refinement operator is proved to be finite, complete, proper and therefore ideal for clausal languages defined by pr...
Macro-Operators in Multirelational Learning: A Search-Space Reduction Technique
Refinement operators are frequently used in the area of multirelational learning (Inductive Logic Programming, ILP) in order to search systematically through a generality order on clauses for a correct theory. Only the clauses reachable by a finite number of applications of a refinement operator are considered by a learning system using this refinement operator; i.e., the refinement operator dete...
Generalizing Refinement Operators to Learn Prenex Conjunctive Normal Forms
Inductive Logic Programming considers almost exclusively universally quantified theories. To add expressiveness, prenex conjunctive normal forms (PCNF) with existential variables should also be considered. ILP mostly uses learning with refinement operators. To extend refinement operators to PCNF, we should first do so with substitutions. However, applying a classic substitution to a PCNF with exi...
Sorted Downward Refinement: Building Background Knowledge into a Refinement Operator for Inductive Logic Programming
Since its inception, the field of inductive logic programming has been centrally concerned with the use of background knowledge in induction. Yet, surprisingly, no serious attempts have been made to account for background knowledge in refinement operators for clauses, even though such operators are one of the most important, prominent and widely-used devices in the field. This paper shows how a sort...
Efficient homomorphism-free enumeration of conjunctive queries
Many algorithms in the field of inductive logic programming rely on a refinement operator satisfying certain desirable properties. Unfortunately, for the space of conjunctive queries under θ-subsumption, no optimal refinement operator exists. In this paper, we argue that this does not imply that frequent pattern mining in this setting cannot be efficient. As an example, we consider the problem...
Publication date: 2014